With its continuously growing popularity around the world, fitness activity analysis has become an emerging research topic in computer vision. While a variety of new tasks and algorithms have been proposed recently, there is a growing hunger for data resources with high-quality data, fine-grained labels, and diverse environments. In this paper, we present FLAG3D, a large-scale 3D fitness activity dataset with language instruction containing 180K sequences of 60 categories. FLAG3D features the following three aspects: 1) accurate and dense 3D human poses captured by an advanced MoCap system to handle complex activities and large movements; 2) detailed and professional language instructions describing how to perform a specific activity; 3) versatile video resources from a high-tech MoCap system, rendering software, and cost-effective smartphones in natural environments. Extensive experiments and in-depth analysis show that FLAG3D offers great research value for various challenges, such as cross-domain human action recognition, dynamic human mesh recovery, and language-guided human action generation. Our dataset and source code will be publicly available at https://andytang15.github.io/FLAG3D.
Generating multi-genre, long-term choreography sequences from given music is a challenging task, due to the lack of a multi-genre dataset. To tackle this problem, we propose a Multi Art Genre Intelligent Choreography Dataset (MagicDance). The data of MagicDance was captured from professional dancers assisted by motion capture technicians. It contains a total of 8 hours of 3D motion-captured human dance with paired music, covering 16 different dance genres. To the best of our knowledge, MagicDance is the 3D dance dataset with the most genres. In addition, we find that the two existing types of methods (generation-based and synthesis-based) can each satisfy only one of diversity and duration, but they complement each other to some extent. Based on this observation, we also propose a generation-synthesis choreography network (MagicNet), which cascades a Diffusion-based 3D Diverse Dance fragments Generation Network (3DGNet) and a Genre&Coherent aware Retrieval Module (GCRM). The former can generate various dance fragments from a single music clip. The latter is utilized to select the best dance fragments generated by 3DGNet and stitch them into a complete dance according to genre and coherence matching scores. Quantitative and qualitative experiments demonstrate the quality of MagicDance and the state-of-the-art performance of MagicNet.
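To make the cascade concrete, below is a minimal sketch of the retrieval step GCRM is described as performing: among candidate fragments produced by 3DGNet, keep the one that best balances genre matching against boundary coherence with the dance assembled so far. Every function and the weight `w` are illustrative placeholders, not the paper's implementation.

```python
# A toy selection step in the spirit of GCRM: combine a genre-matching score
# with a boundary-coherence score and keep the best candidate fragment.
import numpy as np

def coherence(prev_frag, frag):
    """Smaller pose jump at the fragment boundary -> higher coherence."""
    return -np.linalg.norm(prev_frag[-1] - frag[0])

def genre_score(genre_logits, target_genre):
    """How strongly a candidate fragment matches the requested genre."""
    return genre_logits[target_genre]

def select_fragment(candidates, genre_logits, prev_frag, target_genre, w=0.5):
    """candidates: list of (T, D) pose sequences; genre_logits: list of arrays."""
    scores = [genre_score(g, target_genre) + w * coherence(prev_frag, f)
              for f, g in zip(candidates, genre_logits)]
    return candidates[int(np.argmax(scores))]
```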
This paper presents SimVTP: a Simple Video-Text Pretraining framework via masked autoencoders. We randomly mask out the spatial-temporal tubes of the input video and the word tokens of the input text, and then feed them into a unified autoencoder to reconstruct the missing pixels and words. SimVTP has several properties: 1) Thanks to the unified autoencoder, SimVTP reconstructs the masked signal of one modality with the help of the other, which implicitly learns the cross-modal alignment between video tubes and text tokens. 2) SimVTP not only benefits from a high video masking ratio (e.g. 90%) due to the temporal redundancy of video, but also needs a high text masking ratio (e.g. 75%), much higher than BERT's (e.g. 15%), to achieve optimal performance. This is because the aid of the video modality makes text reconstruction less challenging, so a higher mask ratio is needed to make the pretext task hard enough for useful feature learning. 3) Equipping SimVTP with video-text contrastive learning (VTC) and video-text matching (VTM), two commonly used cross-modal training strategies, further improves transfer performance significantly. 4) SimVTP is data-efficient: pre-trained on only 10% of the data of WebVid-2M, it achieves surprisingly good results (43.8 R@1) on MSRVTT, far above recent state-of-the-art methods pre-trained on both CC3M and WebVid-2M. We transfer our pre-trained model to various downstream tasks and achieve superior performance. The code and models will be released at https://github.com/mayuelala/SimVTP.
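The asymmetric masking ratios are the load-bearing design choice here, so a minimal sketch of the masking step may help; the tube/token counts and embedding size below are assumptions for illustration, not SimVTP's actual configuration.

```python
# A minimal sketch of SimVTP-style random masking with the abstract's ratios
# (90% of video tubes, 75% of text tokens). Shapes are hypothetical.
import torch

def random_mask(num_tokens: int, mask_ratio: float) -> torch.Tensor:
    """Boolean mask with `mask_ratio` of positions set to True (= masked)."""
    num_masked = int(num_tokens * mask_ratio)
    ids = torch.rand(num_tokens).argsort()         # random permutation
    mask = torch.zeros(num_tokens, dtype=torch.bool)
    mask[ids[:num_masked]] = True
    return mask

# E.g. 8 frames, 2-frame tubes over 14x14 patches -> 4 * 14 * 14 = 784 tubes.
video_tubes = torch.randn(784, 768)                # hypothetical tube embeddings
text_tokens = torch.randn(32, 768)                 # hypothetical word embeddings

video_mask = random_mask(video_tubes.size(0), mask_ratio=0.90)
text_mask = random_mask(text_tokens.size(0), mask_ratio=0.75)

# Only the visible tokens of both modalities go into the unified encoder;
# a decoder then reconstructs the masked pixels and words.
visible = torch.cat([video_tubes[~video_mask], text_tokens[~text_mask]], dim=0)
```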
Accurate whole-body multi-person pose estimation and tracking is an important yet challenging topic in computer vision. To capture the subtle actions of humans for complex behavior analysis, whole-body pose estimation including the face, body, hands and feet is essential beyond conventional body-only pose estimation. In this paper, we present AlphaPose, a system that can perform accurate whole-body pose estimation and tracking jointly while running in real time. To this end, we propose several new techniques: Symmetric Integral Keypoint Regression (SIKR) for fast and fine localization, Parametric Pose Non-Maximum Suppression (P-NMS) for eliminating redundant human detections, and Pose-Aware Identity Embedding for joint pose estimation and tracking. During training, we resort to a Part-Guided Proposal Generator (PGPG) and multi-domain knowledge distillation to further improve accuracy. Our method is able to localize whole-body keypoints accurately and track humans simultaneously given inaccurate bounding boxes and redundant detections. We show a significant improvement over current state-of-the-art methods in both speed and accuracy on COCO-wholebody, COCO, PoseTrack, and our proposed Halpe-FullBody pose estimation dataset. Our model, source code and dataset are made publicly available at https://github.com/MVIG-SJTU/AlphaPose.
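For intuition on the P-NMS idea named above, here is a heavily simplified greedy variant: a detection is suppressed when its pose is too similar to an already-kept, higher-confidence one. The Gaussian keypoint similarity and the threshold are stand-ins for the paper's learned parametric criterion.

```python
# A simplified greedy pose NMS: keep the highest-scoring pose, drop poses
# whose keypoints nearly coincide with an already-kept pose.
import numpy as np

def pose_similarity(p1, p2, sigma=0.3):
    """p1, p2: (K, 2) keypoint arrays; returns a soft similarity in [0, 1]."""
    d2 = np.sum((p1 - p2) ** 2, axis=1)
    return float(np.mean(np.exp(-d2 / (2 * sigma ** 2))))

def pose_nms(poses, scores, thresh=0.5):
    """Return indices of kept poses, most confident first."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(pose_similarity(poses[i], poses[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```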
We present state advantage weighting for offline reinforcement learning (RL). In contrast to the action advantage $A(s,a)$ commonly adopted in QSA learning, we leverage the state advantage $A(s,s^\prime)$ and QSS learning for offline RL, hence decoupling actions from values. We expect the agent to reach high-reward states, with the action determined by how the agent can get to the corresponding state. Experiments on D4RL datasets show that our proposed method achieves remarkable performance against common baselines. Furthermore, our method shows good generalization capability when transferring from offline to online.
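One way to read the decoupling: the value is attached to a state transition rather than to an action, and the action is recovered afterwards. Below is a hedged formalization in our own notation; the baseline $V(s)$, the Bellman form over successor states, and the inverse-dynamics model $I$ follow the general QSS-learning literature and may differ from this paper's exact operators.

```latex
% A hedged formalization of the QSA-vs-QSS contrast; notation is ours.
\begin{align*}
  A(s,a)  &= Q(s,a)  - V(s)  && \text{action advantage (QSA learning)} \\
  A(s,s') &= Q(s,s') - V(s)  && \text{state advantage (QSS learning)} \\
  Q(s,s') &= r(s,s') + \gamma \max_{s''} Q(s',s'') && \text{values over state transitions} \\
  a       &= I(s,s')         && \text{an inverse-dynamics model recovers the action}
\end{align*}
```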
In research on subsurface seismic imaging, solving the acoustic wave equation is a key component of existing models. With the development of deep learning, neural networks have been applied to numerically solving partial differential equations, particularly the wave equation, by learning the mapping between inputs and equation solutions, since traditional solvers can be time-consuming. Previous work on solving the wave equation with neural networks considered a single velocity model or multiple simple velocity models, which is limiting in practice. Therefore, inspired by the idea of operator learning, this work leverages the Fourier Neural Operator (FNO) to effectively learn frequency-domain seismic wavefields under the context of variable velocity models. Furthermore, we propose a new framework, Parallel Fourier Neural Operator (PFNO), for efficiently training FNO-based solvers given multiple source locations and frequencies. Numerical experiments demonstrate the high accuracy of both FNO and PFNO with complex velocity models from the OpenFWI datasets. Moreover, cross-dataset generalization tests verify that PFNO adapts to out-of-distribution velocity models, and PFNO retains robust performance under random noise in the labels. Finally, PFNO achieves higher computational efficiency than the traditional finite-difference method on large-scale test datasets. These advantages give FNO-based solvers the potential to serve as powerful models for research on seismic waves.
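Since the abstract leans on the FNO, a minimal spectral-convolution layer, the FNO's core block, is sketched below following the standard formulation of Li et al.; it is not the paper's PFNO architecture, and for brevity it keeps only one corner of low-frequency modes rather than both conjugate corners of the usual implementation.

```python
# A minimal 2D spectral convolution (the core FNO block): transform to the
# frequency domain, apply learned weights on a few low-frequency modes, and
# transform back. Simplified from the standard FNO; not the paper's PFNO.
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, modes1, modes2):
        super().__init__()
        scale = 1.0 / (in_ch * out_ch)
        self.weight = nn.Parameter(scale * torch.randn(
            in_ch, out_ch, modes1, modes2, dtype=torch.cfloat))
        self.modes1, self.modes2 = modes1, modes2

    def forward(self, x):                              # x: (B, in_ch, H, W)
        x_ft = torch.fft.rfft2(x)                      # to the frequency domain
        out_ft = torch.zeros(x.size(0), self.weight.size(1), x.size(2),
                             x.size(3) // 2 + 1,
                             dtype=torch.cfloat, device=x.device)
        # Mix channels on the retained low-frequency modes only.
        out_ft[:, :, :self.modes1, :self.modes2] = torch.einsum(
            "bixy,ioxy->boxy",
            x_ft[:, :, :self.modes1, :self.modes2], self.weight)
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])  # back to space
```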
A key challenge of continual reinforcement learning (CRL) in dynamic environments is to promptly adapt the RL agent's behavior as the environment changes over its lifetime, while minimizing catastrophic forgetting of the learned information. To address this challenge, in this paper we propose DaCoRL, dynamics-adaptive continual RL. DaCoRL learns a context-conditioned policy using progressive contextualization, which incrementally clusters a stream of stationary tasks in the dynamic environment into a series of contexts, and opts for an expandable multi-head neural network to approximate the policy. Specifically, we define a set of tasks with similar dynamics as an environmental context, and formalize context inference as a procedure of online Bayesian infinite Gaussian mixture clustering on environment features, resorting to online Bayesian inference to infer the posterior distribution over contexts. Under the assumption of a Chinese restaurant process prior, this technique can accurately classify the current task as a previously seen context, or instantiate a new context as needed, without relying on any external indicator to signal environmental changes in advance. Furthermore, we employ an expandable multi-head neural network whose output layer is synchronously expanded with each newly instantiated context, together with a knowledge distillation regularization term to preserve performance on learned tasks. As a general framework that can be combined with various deep RL algorithms, DaCoRL features consistent superiority over existing methods in terms of stability, overall performance and generalization ability, as verified by extensive experiments on several robot navigation and MuJoCo locomotion tasks.
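The context-inference step lends itself to a compact sketch: score the current task's environment feature against each existing context under a Chinese restaurant process prior, and open a new context when the mass reserved for a new "table" wins. The Gaussian likelihood and the concentration parameter `alpha` below are toy simplifications of the paper's infinite Gaussian mixture inference.

```python
# A toy CRP-style online context assignment: an environment feature joins an
# existing context or instantiates a new one.
import numpy as np

def assign_context(feature, contexts, counts, alpha=1.0, sigma=1.0):
    """contexts: list of mean vectors; counts: tasks assigned per context."""
    n = sum(counts)
    scores = []
    for mu, c in zip(contexts, counts):
        lik = np.exp(-np.sum((feature - mu) ** 2) / (2 * sigma ** 2))
        scores.append(c / (n + alpha) * lik)       # CRP prior x likelihood
    scores.append(alpha / (n + alpha))             # mass for a brand-new context
    k = int(np.argmax(scores))
    if k == len(contexts):                         # instantiate a new context
        contexts.append(np.array(feature, dtype=float))
        counts.append(1)
    else:                                          # update the matched context
        counts[k] += 1
        contexts[k] += (feature - contexts[k]) / counts[k]
    return k
```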
Automatic radiographic report generation is a challenging cross-domain task that aims to automatically generate accurate and semantically coherent reports to describe medical images. Despite recent progress in this field, many challenges remain, at least in the following aspects. First, radiographic images are very similar to each other, so it is difficult to capture fine-grained visual differences using a CNN as the visual feature extractor, as many existing methods do. In addition, semantic information has been widely applied to improve the performance of generation tasks (e.g. image captioning), but existing methods often fail to provide effective medical semantic features. To address these problems, in this paper we propose a memory-augmented sparse attention block that utilizes bilinear pooling to capture the higher-order interactions between input fine-grained image features while producing sparse attention. Furthermore, we introduce a novel Medical Concepts Generation Network (MCGN) to predict fine-grained semantic concepts and incorporate them into the report generation process. Our proposed method shows promising performance on MIMIC-CXR, the largest recently released benchmark, outperforming multiple state-of-the-art methods in image captioning and medical report generation.
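To make the bilinear-pooling-plus-sparse-attention idea concrete, here is a hedged sketch in which a low-rank factorized bilinear form scores each image region against a query, and a top-k truncation stands in for the paper's sparse attention mechanism; all dimensions and the truncation itself are our assumptions, not the paper's design.

```python
# A hedged sketch: low-rank bilinear pooling between a query and fine-grained
# region features, with top-k truncation as a stand-in for sparse attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearSparseAttention(nn.Module):
    def __init__(self, dim, rank=128, topk=8):
        super().__init__()
        # The bilinear form q^T W k is factorized as (U_q q) * (U_k k).
        self.Uq, self.Uk = nn.Linear(dim, rank), nn.Linear(dim, rank)
        self.w = nn.Linear(rank, 1)
        self.topk = topk

    def forward(self, query, feats):                # query: (B, D); feats: (B, N, D)
        joint = torch.tanh(self.Uq(query)).unsqueeze(1) * torch.tanh(self.Uk(feats))
        logits = self.w(joint).squeeze(-1)          # (B, N) second-order scores
        top = logits.topk(self.topk, dim=-1)        # keep only the top-k regions
        sparse = torch.full_like(logits, float("-inf"))
        sparse.scatter_(-1, top.indices, top.values)
        att = F.softmax(sparse, dim=-1)             # zero weight outside the top-k
        return torch.einsum("bn,bnd->bd", att, feats)
```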
We present a novel paradigm for building an animatable 3D human representation from a monocular video input, so that it can be rendered under any unseen pose and view. Our method is based on a dynamic Neural Radiance Field (NeRF) rigged by a mesh-based parametric 3D human model serving as a geometry proxy. Previous methods usually rely on multi-view videos or accurate 3D geometry information as additional inputs; besides, most methods suffer from degraded quality when generalizing to unseen poses. We identify that the key to generalization is a good input embedding for querying the dynamic NeRF: a good input embedding should define an injective mapping in the full volumetric space, guided by the surface mesh deformation under pose variation. Based on this observation, we propose to embed an input query with its relationship to the local surface regions spanned by a set of geodesic nearest neighbors among the mesh vertices. By including both position and relative distance information, our embedding defines a distance-preserving deformation mapping and generalizes well to unseen poses. To reduce the dependency on additional inputs, we first initialize per-frame 3D meshes using off-the-shelf tools, and then propose a pipeline to jointly optimize the NeRF and refine the initial meshes. Extensive experiments show that our approach can synthesize plausible human rendering results under unseen poses and views.
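The input embedding is the heart of the claim, so a simplified sketch follows: a query point is embedded by its offsets and distances to K nearby posed mesh vertices. Euclidean nearest neighbors stand in for the paper's geodesic neighborhoods, and the concatenated feature layout is our assumption.

```python
# A simplified relative-distance embedding for a NeRF query point: offsets and
# distances to the K nearest posed mesh vertices (Euclidean here, geodesic in
# the paper), which deforms with the mesh and so generalizes across poses.
import torch

def query_embedding(query, vertices, k=8):
    """query: (3,) point; vertices: (V, 3) posed mesh -> (4k,) feature."""
    d = torch.norm(vertices - query, dim=-1)       # distance to every vertex
    nn_dist, nn_idx = d.topk(k, largest=False)     # K nearest neighbors
    offsets = query - vertices[nn_idx]             # relative positions, (k, 3)
    # Positions plus distances give an (approximately) distance-preserving
    # deformation mapping as the mesh moves with the pose.
    return torch.cat([offsets.reshape(-1), nn_dist], dim=0)
```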
In most real-world recommendation scenarios, users exhibit multiple types of behaviors (e.g., clicking, adding to cart, purchasing, etc.), which are beneficial for learning users' multifaceted preferences. Since the multiple types of behaviors explicitly exhibit dependencies, effectively modeling these complex behavior dependencies is crucial for multi-behavior prediction. State-of-the-art multi-behavior models learn behavior dependencies indistinguishably by taking all historical interactions as input. However, different behaviors may reflect different aspects of user preference, which means some irrelevant interactions may act as noise for the target behavior being predicted. To address these limitations, we introduce multi-interest learning into multi-behavior recommendation. More specifically, we propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning (CKML) framework to learn shared and behavior-specific interests for different behaviors. CKML introduces two advanced modules, namely Coarse-grained Interest Extracting (CIE) and Fine-grained Behavioral Correlation (FBC), which work jointly to capture fine-grained behavioral dependencies. CIE uses knowledge-aware information to extract initial representations of each interest. FBC incorporates a dynamic routing scheme to further assign each behavior among the interests. In addition, we use a self-attention mechanism to correlate different behavioral information at the interest level. Empirical results on three real-world datasets verify the effectiveness and efficiency of our model in exploiting multi-behavior data. Further experiments demonstrate the effectiveness of each module as well as the robustness and superiority of the shared-and-specific modeling paradigm for multi-behavior data.
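A compact sketch of the dynamic-routing idea behind FBC may help: behavior embeddings are softly assigned among interest capsules, with assignments sharpened over a few iterations by agreement. The iteration count, shapes, and the normalization used as a squash are illustrative, not the paper's exact design.

```python
# A toy dynamic-routing pass: softly assign N behavior embeddings among
# interest capsules and refine the assignment by agreement over a few rounds.
import torch
import torch.nn.functional as F

def route_behaviors(behaviors, num_interests=4, iters=3):
    """behaviors: (N, D) -> (num_interests, D) interest representations."""
    n, _ = behaviors.shape
    logits = torch.zeros(n, num_interests)          # routing logits b_ij
    for _ in range(iters):
        c = F.softmax(logits, dim=-1)               # per-behavior soft assignment
        interests = c.t() @ behaviors               # weighted sums per interest
        interests = F.normalize(interests, dim=-1)  # stand-in for a squash
        logits = logits + behaviors @ interests.t() # raise logits on agreement
    return interests
```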